Expensive bounding-box annotations have limited the development of object detection. Thus, it is necessary to focus on the more challenging task of few-shot object detection, which requires a detector to recognize objects of novel categories from only a few training samples. Many recent popular methods adopt a meta-learning-style training scheme and have achieved promising performance, e.g., the Meta R-CNN series. However, the support data are used only as class attention to guide the detection of each query image, and the correlations among them remain unexploited. Moreover, many recent works treat the support data and query images as independent branches without considering the relationships between them. To address this issue, we propose a dynamic relevance learning model that exploits the relationships between all support images and the regions of interest (RoIs) on the query image to construct a dynamic graph convolutional network (GCN). By adjusting the prediction distribution of the base detector with the output of this GCN, the proposed model serves as a hard auxiliary classification task that guides the detector to improve class representations implicitly. Comprehensive experiments are conducted on the Pascal VOC and MS COCO datasets. The proposed model achieves the best overall performance, demonstrating its effectiveness in learning more generalized features. Our code is available at https://github.com/liuweijie19980216/drl-for-fsod.
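To make the graph construction concrete, the following is a minimal sketch, assuming cosine-similarity affinities and illustrative feature shapes (neither taken from the released code): it builds a data-dependent adjacency over support features and query RoI features and propagates once through a GCN layer.

```python
# Hypothetical sketch of the dynamic-GCN idea: build an affinity graph over
# support features and query RoI features, then run one graph-convolution
# step. The cosine-similarity affinity and all shapes are assumptions.
import torch
import torch.nn.functional as F

def dynamic_gcn_step(support_feats, roi_feats, weight):
    """support_feats: (S, D), roi_feats: (R, D), weight: (D, D)."""
    nodes = torch.cat([support_feats, roi_feats], dim=0)              # (S+R, D)
    sim = F.cosine_similarity(nodes.unsqueeze(1), nodes.unsqueeze(0), dim=-1)
    adj = F.softmax(sim, dim=-1)              # dynamic, data-dependent adjacency
    return F.relu(adj @ nodes @ weight)       # one GCN propagation step

S, R, D = 5, 10, 64
out = dynamic_gcn_step(torch.randn(S, D), torch.randn(R, D), torch.randn(D, D))
```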
Egocentric videos offer fine-grained information for high-fidelity modeling of human behaviors. Hands and interacting objects are one critical aspect of understanding a viewer's behaviors and intentions. We provide a labeled dataset consisting of 11,243 egocentric images with per-pixel segmentation labels of hands and objects being interacted with during a diverse array of daily activities. Our dataset is the first to label detailed hand-object contact boundaries. We introduce a context-aware compositional data augmentation technique to adapt to the distribution of out-of-domain YouTube egocentric videos. We show that our robust hand-object segmentation model and dataset can serve as a foundational tool to boost or enable several downstream vision applications, including hand state classification, video activity recognition, 3D mesh reconstruction of hand-object interactions, and video inpainting of hand-object foregrounds in egocentric videos. Dataset and code are available at: https://github.com/owenzlz/egohos
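As a rough illustration of compositional augmentation in this spirit (the actual technique is context-aware; the mask-based alpha compositing below and all array shapes are our assumptions), a segmented hand-object foreground can be pasted onto a new background:

```python
# Hypothetical compositional augmentation: paste a masked hand-object
# foreground onto a new background using the segmentation mask.
import numpy as np

def composite(foreground, mask, background):
    """foreground/background: (H, W, 3) uint8; mask: (H, W) in {0, 1}."""
    m = mask[..., None].astype(np.float32)
    out = m * foreground + (1.0 - m) * background
    return out.astype(np.uint8)

fg = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
bg = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mask = (np.random.rand(480, 640) > 0.5).astype(np.uint8)
aug = composite(fg, mask, bg)
```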
Hypergraph neural networks (HNNs), which use neural networks to encode hypergraphs, provide a promising way to model higher-order relations in data and further solve relevant prediction tasks built upon such higher-order relations. However, higher-order relations in practice contain complex patterns and are often highly irregular, so it is often challenging to design an HNN that is expressive enough for these relations while keeping computation efficient. Inspired by hypergraph diffusion algorithms, this work proposes a new HNN architecture named ED-HNN, which provably represents any continuous equivariant hypergraph diffusion operator that can model a wide range of higher-order relations. ED-HNN can be implemented efficiently by combining star expansions of hypergraphs with standard message-passing neural networks. ED-HNN further shows great superiority in processing heterophilic hypergraphs and constructing deep models. We evaluate ED-HNN for node classification on nine real-world hypergraph datasets. ED-HNN uniformly outperforms the best baselines over these nine datasets and achieves more than 2%$\uparrow$ in prediction accuracy on four of them.
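To illustrate the star expansion mentioned above, here is a minimal sketch (our simplification, with assumed shapes and mean aggregation): each hyperedge becomes an auxiliary node, and messages pass node → hyperedge → node, a pattern standard message-passing networks can process.

```python
# Star expansion sketch: hyperedges become auxiliary nodes; one round of
# node -> hyperedge -> node message passing with mean aggregation (assumed).
import torch

def star_expansion_pass(x, hyperedges):
    """x: (N, D) node features; hyperedges: list of lists of node indices."""
    edge_feats = torch.stack([x[e].mean(dim=0) for e in hyperedges])  # node -> hyperedge
    out = torch.zeros_like(x)
    count = torch.zeros(x.size(0), 1)
    for j, e in enumerate(hyperedges):
        out[e] += edge_feats[j]                                       # hyperedge -> node
        count[e] += 1
    return out / count.clamp(min=1)

x = torch.randn(6, 8)
h = [[0, 1, 2], [2, 3], [3, 4, 5]]
msg = star_expansion_pass(x, h)
```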
Incorporating knowledge graphs as side information has become a new trend in recommendation systems. Recent studies regard items as entities of a knowledge graph and leverage graph neural networks to assist item encoding, yet they consider each relation type individually. However, there are often too many relation types, and some relation types involve too few entities. We argue that it is neither efficient nor effective to use every relation type for item encoding. In this paper, we propose a VRKG4Rec model (Virtual Relational Knowledge Graphs for Recommendation), which explicitly distinguishes the influence of different relations on item representation learning. We first construct virtual relational knowledge graphs (VRKGs) by an unsupervised learning scheme. We also design a local weighted smoothing (LWS) mechanism for encoding nodes, which iteratively updates a node embedding depending only on its own embedding and those of its neighbors, while involving no additional training parameters. We further employ the LWS mechanism on a user-item bipartite graph for user representation learning, which utilizes item encodings enriched with relational knowledge to help train user representations. Experimental results on two public datasets validate that our VRKG4Rec model outperforms the state-of-the-art methods. The implementations are available at https://github.com/lulu0913/VRKG4Rec.
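As a hypothetical sketch of an LWS-style update (the similarity-based weighting below is our assumption, not necessarily the paper's exact formula), a node embedding can be refreshed from its own embedding and its neighbors' without any learnable parameters:

```python
# LWS-style smoothing sketch: refresh each node from itself and its
# neighbors with parameter-free, similarity-derived weights (assumed).
import torch
import torch.nn.functional as F

def lws_update(x, neighbors, alpha=0.5):
    """x: (N, D) embeddings; neighbors: list of neighbor-index lists."""
    out = x.clone()
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        w = F.softmax(x[nbrs] @ x[i], dim=0)          # weights from similarity to node i
        out[i] = alpha * x[i] + (1 - alpha) * (w @ x[nbrs])
    return out

x = torch.randn(4, 16)
x = lws_update(x, [[1, 2], [0], [0, 3], [2]])
```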
Graph-based learning is a rapidly growing sub-field of machine learning with applications in social networks, citation networks, and bioinformatics. One of the most popular models is graph attention networks. They were introduced to allow a node to aggregate information from the features of neighbor nodes in a non-uniform way, in contrast to simple graph convolution, which does not distinguish the neighbors of a node. In this paper, we theoretically study this expected behaviour of graph attention networks. We prove multiple results on the performance of the graph attention mechanism for the problem of node classification in a contextual stochastic block model, where the node features are obtained from a mixture of Gaussians and the edges from a stochastic block model. We show that in an "easy" regime, where the distance between the means of the Gaussians is large enough, graph attention is able to distinguish inter-class from intra-class edges; thus it maintains the weights of important edges and significantly reduces the weights of unimportant edges. Consequently, we show that this implies perfect node classification. In the "hard" regime, we show that every attention mechanism fails to distinguish intra-class from inter-class edges. We evaluate our theoretical results on synthetic and real-world data.
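For concreteness, here is a small sketch, with illustrative parameter values of our choosing, of the contextual stochastic block model underlying the analysis: features from a two-component Gaussian mixture and edges from a two-block SBM.

```python
# Contextual SBM sketch: class-dependent Gaussian features, plus edges drawn
# with probability p within a class and q across classes (values assumed).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 8
labels = rng.integers(0, 2, n)                                # two classes
mu = np.ones(d)
X = rng.normal(0, 1, (n, d)) + np.outer(2 * labels - 1, mu)   # means at +/- mu

p, q = 0.5, 0.1                                 # intra- / inter-class edge probability
same = labels[:, None] == labels[None, :]
probs = np.where(same, p, q)
A = (rng.random((n, n)) < probs).astype(int)
A = np.triu(A, 1)
A = A + A.T                                     # undirected, no self-loops
```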
Multi-armed bandits (MAB) provide a principled online learning approach to attain the balance between exploration and exploitation. Owing to their strong performance when learning from limited feedback, without needing to learn how to act in every situation separately, multi-armed bandits have drawn widespread attention in applications such as recommender systems. Likewise, within recommender systems, collaborative filtering (CF) is arguably the earliest and most influential method. Crucially, new users and an ever-changing pool of recommended items are challenges that recommender systems need to solve. For collaborative filtering, the classical approach is to train the model offline and then perform online testing, but this approach can no longer handle the dynamic changes of user preferences, i.e., the so-called cold start. How, then, can we effectively recommend items to users in the absence of valid information? To address the above problems, a multi-armed-bandit-based collaborative filtering recommender system, named BanditMF, has been proposed. BanditMF is designed to address two challenges in multi-armed bandit algorithms and collaborative filtering: (1) how to solve the cold-start problem for collaborative filtering under conditions where valid information is scarce, and (2) the problem that bandit algorithms in strong social-relation domains estimate the unknown parameters associated with each user independently and ignore the correlations between users.
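To illustrate the exploration-exploitation balance at the heart of this line of work (this is a standard UCB1 loop with an assumed Bernoulli reward model, not BanditMF itself):

```python
# UCB1 sketch: play each arm once, then pick the arm maximizing the
# empirical mean plus an exploration bonus that shrinks as an arm is pulled.
import math
import random

def ucb1(true_means, horizon=1000):
    k = len(true_means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                   # play each arm once first
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if random.random() < true_means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
    return counts

print(ucb1([0.2, 0.5, 0.8]))  # most pulls should go to the 0.8 arm
```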
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
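As a simplified stand-in for global photometric alignment (the paper's module is learned; the closed-form per-channel mean/std matching below is our assumption):

```python
# Photometric alignment sketch: match a source image's per-channel mean and
# standard deviation to target-domain statistics (a simplification).
import numpy as np

def photometric_align(src, tgt_mean, tgt_std, eps=1e-6):
    """src: (H, W, 3) float image; tgt_mean/tgt_std: per-channel (3,)."""
    src_mean = src.mean(axis=(0, 1))
    src_std = src.std(axis=(0, 1))
    return (src - src_mean) / (src_std + eps) * tgt_std + tgt_mean

src = np.random.rand(256, 512, 3).astype(np.float32)
aligned = photometric_align(src, tgt_mean=np.array([0.3, 0.3, 0.3]),
                            tgt_std=np.array([0.2, 0.2, 0.2]))
```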
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
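A hypothetical sketch of the style-aware adaptation idea follows; the exact modulation (per-channel scales predicted from the style code) is our assumption rather than the paper's precise design.

```python
# Style-adaptive feed-forward sketch: a style code predicts per-channel
# scales that modulate the hidden activations of a transformer FFN.
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    def __init__(self, dim, hidden, style_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.mod = nn.Linear(style_dim, hidden)   # style -> per-channel scales

    def forward(self, x, style):
        scale = torch.sigmoid(self.mod(style))    # (B, hidden) in (0, 1)
        h = torch.relu(self.fc1(x)) * scale.unsqueeze(1)
        return self.fc2(h)

ffn = StyleAdaptiveFFN(dim=256, hidden=1024, style_dim=128)
out = ffn(torch.randn(2, 50, 256), torch.randn(2, 128))
```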
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
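A heavily simplified sketch of the second-stage objective might look as follows; the placeholder warp, the L1-only photometric error, and all names are our assumptions (a real implementation back-projects with camera intrinsics, applies the predicted pose, and bilinearly samples).

```python
# Photometric-error sketch: warp the previous frame into the current view
# using predicted depth and pose, then penalize the reconstruction error.
import torch
import torch.nn.functional as F

def photometric_loss(frame_t, frame_prev, warp_fn, depth, pose):
    """frame_*: (B, 3, H, W). warp_fn reprojects frame_prev into frame_t's view."""
    reconstructed = warp_fn(frame_prev, depth, pose)
    return F.l1_loss(reconstructed, frame_t)          # L1 photometric error

# Identity "warp" placeholder marks where real reprojection would happen.
loss = photometric_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                        lambda f, d, p: f, torch.rand(1, 1, 64, 64), torch.rand(1, 6))
print(loss.item())
```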
Increasing research interest focuses on sequential recommender systems, which aim to model dynamic sequence representations precisely. However, the loss functions most commonly used in state-of-the-art sequential recommendation models have essential limitations. To name a few: Bayesian Personalized Ranking (BPR) loss suffers from the vanishing gradient problem caused by numerous negative samples and prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, so it is likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only focuses on the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose to calculate a Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct: it enjoys the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing the CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, reaches 125.63%, 69.90%, and 33.24% average improvement in full-ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data increases rapidly with wall-clock time and is superior to that of other loss functions during almost the whole process of model training.
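A minimal sketch of the CCE idea, with assumed tensor shapes: average the full-softmax cross-entropy over every timestamp of the training sequence rather than only the last one.

```python
# Cumulative cross-entropy sketch: full-softmax CE over all timestamps,
# instead of CE on the final position only.
import torch
import torch.nn.functional as F

def cce_loss(logits, targets):
    """logits: (B, T, V) scores over all items; targets: (B, T) next-item ids."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))       # averaged over every step

B, T, V = 4, 20, 1000
loss = cce_loss(torch.randn(B, T, V), torch.randint(0, V, (B, T)))
```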